DiNNO: Distributed Neural Network Optimization for Multi-Robot Collaborative Learning

Authors

Javier Yu, Joseph A. Vincent, Mac Schwager

Abstract

We present DiNNO, a distributed algorithm that enables a group of robots to collaboratively optimize a deep neural network model while communicating over a mesh network. Each robot only has access to its own data and maintains its own version of the network, but eventually learns a model that is as good as if it had been trained on all the data centrally. No robot sends raw data over the wireless network, preserving privacy and ensuring efficient use of bandwidth. At each iteration, each robot approximately optimizes an augmented Lagrangian function, then communicates the resulting weights to its neighbors, updates its dual variables, and repeats. Eventually, the robots' local network weights reach consensus. For convex objective functions, this consensus is a global optimum. Unlike many existing methods, we test our algorithm on robotics-related learning tasks with nontrivial network architectures. We compare DiNNO to two benchmark algorithms in (i) an MNIST image classification task, (ii) a multi-robot implicit mapping task, and (iii) a multi-robot reinforcement learning task. In these experiments we show that DiNNO performs well when faced with nonconvex objectives, time-varying communication graphs, and streaming data. Our method outperforms the baselines and was able to achieve validation loss equivalent to centrally trained models. See msl.stanford.edu/projects/dist_nn_train for videos and code.
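The iteration described in the abstract (approximate primal minimization of an augmented Lagrangian, weight exchange with neighbors, then a dual-variable update, repeated until the local copies agree) follows the pattern of consensus ADMM. The sketch below illustrates that pattern on a toy convex least-squares problem, where the primal step has a closed form. The ring graph, quadratic losses, penalty `rho`, and all variable names are illustrative assumptions, not the authors' implementation: DiNNO instead takes a few stochastic-gradient steps on a neural network loss in the primal step.

```python
import numpy as np

# Toy consensus-ADMM sketch (our simplification, not the DiNNO code):
# each "robot" i holds a convex local loss f_i(x) = ||A_i x - b_i||^2
# and exchanges only its parameter vector with graph neighbors.
rng = np.random.default_rng(0)
n_robots, dim, rho = 4, 3, 1.0
A = [rng.normal(size=(6, dim)) for _ in range(n_robots)]  # private local data
b = [rng.normal(size=6) for _ in range(n_robots)]
# Ring communication graph: robot i talks to robots i-1 and i+1.
neighbors = {i: [(i - 1) % n_robots, (i + 1) % n_robots] for i in range(n_robots)}

x = [np.zeros(dim) for _ in range(n_robots)]  # local parameter copies
y = [np.zeros(dim) for _ in range(n_robots)]  # dual variables

for _ in range(300):
    x_prev = [xi.copy() for xi in x]
    for i in range(n_robots):
        # Primal step: minimize the augmented Lagrangian
        #   f_i(x) + y_i^T x + rho * sum_j ||x - (x_i^k + x_j^k)/2||^2
        # exactly (closed form here because f_i is quadratic).
        H = 2 * A[i].T @ A[i] + 2 * rho * len(neighbors[i]) * np.eye(dim)
        rhs = 2 * A[i].T @ b[i] - y[i]
        for j in neighbors[i]:
            rhs += rho * (x_prev[i] + x_prev[j])  # 2*rho * neighbor midpoint
        x[i] = np.linalg.solve(H, rhs)
    for i in range(n_robots):
        # Dual ascent on the consensus constraints x_i = x_j.
        y[i] = y[i] + rho * sum(x[i] - x[j] for j in neighbors[i])

# Centralized solution on the pooled data, for comparison.
x_star, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
err = max(np.linalg.norm(xi - x_star) for xi in x)
```

After the loop, the four local copies agree with each other and with the pooled least-squares solution, even though no robot ever shared its raw `(A_i, b_i)` data. Replacing the closed-form primal solve with a few SGD steps on a nonconvex loss is, in spirit, what lets this scheme scale to deep networks.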


Similar articles

Distributed Multi-Robot Learning using Particle Swarm Optimization

This thesis studies the automatic design and optimization of high-performing robust controllers for mobile robots using exclusively on-board resources. Due to the often large parameter space and noisy performance metrics, this constitutes an expensive optimization problem. Population-based learning techniques have been proven to be effective in dealing with noise and are thus promising tools to...


Private Collaborative Neural Network Learning

Machine learning algorithms, such as neural networks, create better predictive models when having access to larger datasets. In many domains, such as medicine and finance, each institute has only access to limited amounts of data, and creating larger datasets typically requires collaboration. However, there are privacy related constraints on these collaborations for legal, ethical, and competit...


Collaborative Agent Teams (CAT) for Distributed Multi-Dimensional Optimization

We present a metaheuristic optimization framework based on a Collaborative Agent Teams (CAT) architecture to tackle large-scale mixed-integer optimization problems with complex structures. This framework introduces several conceptual improvements over previous agent teams approaches. We discuss how to configure the three key components of a CAT solver for a particular multidimensional optimizat...


Distributed ARTMAP: a neural network for fast distributed supervised learning

Distributed coding at the hidden layer of a multi-layer perceptron (MLP) endows the network with memory compression and noise tolerance capabilities. However, an MLP typically requires slow off-line learning to avoid catastrophic forgetting in an open input environment. An adaptive resonance theory (ART) model is designed to guarantee stable memories even with fast on-line learning. However, AR...


An Unsupervised Learning Method for an Attacker Agent in Robot Soccer Competitions Based on the Kohonen Neural Network

The RoboCup competition, as a great test-bed, has become a popular domain worldwide in recent years. The main objective of such competitions is to deal with the complex behavior of systems which consist of multiple autonomous agents. The rich experience of human soccer players can be used as a valuable reference for a robot soccer player. However, because of the differences between real and simulated soc...



Journal

Journal title: IEEE Robotics and Automation Letters

Year: 2022

ISSN: 2377-3766

DOI: https://doi.org/10.1109/lra.2022.3142402